She Was Given Up by Her Chinese Parents--and Spent 14 Years Trying to Find a Way Back
More and more Chinese adoptees in the US are trying to reunite with their birth parents. For Youxue, it took more than a decade, and a remarkable coincidence. A girl is found on a street in Ma'anshan, China, in May 1993. Her paternal grandfather, the story goes, set her down and walked away. It's unclear how long she's been outside when somebody arrives and takes her to the orphanage. A white woman adopts the girl and brings her to America in August 1994. She gives her an English name. In spring 2010, when Youxue (her Chinese name) was a high school sophomore in Dallas, Texas, she decided to start searching for her birth parents.
- North America > United States > Texas > Dallas County > Dallas (0.24)
- North America > United States > California (0.14)
- Asia > China > Anhui Province (0.05)
- (5 more...)
- Information Technology (0.70)
- Health & Medicine > Therapeutic Area (0.47)
- Education > Educational Setting (0.34)
Federated Distillation Assisted Vehicle Edge Caching Scheme Based on Lightweight DDPM
Li, Xun, Wu, Qiong, Fan, Pingyi, Wang, Kezhi, Chen, Wen, Letaief, Khaled B.
Vehicle edge caching is a promising technology that can significantly reduce the latency for vehicle users (VUs) to access content by pre-caching user-interested content at edge nodes. It is crucial to accurately predict the content that VUs are interested in without exposing their privacy. Traditional federated learning (FL) can protect user privacy by sharing models rather than raw data. However, the training of FL requires frequent model transmission, which can result in significant communication overhead. Additionally, vehicles may leave the roadside unit (RSU) coverage area before training is completed, leading to training failures. To address these issues, in this letter, we propose a federated distillation-assisted vehicle edge caching scheme based on a lightweight denoising diffusion probabilistic model (LDPM). The simulation results demonstrate that the proposed vehicle edge caching scheme has good robustness to variations in vehicle speed, significantly reducing communication overhead and improving cache hit percentage.
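A minimal sketch of the federated-distillation idea the letter builds on: instead of exchanging full model weights, each vehicle shares only its soft predictions on a shared reference content set, and the RSU averages them to decide what to cache. All names, shapes, and the stand-in predictor below are illustrative assumptions, not the paper's actual model.

```python
import random

NUM_CONTENTS = 5   # size of the shared reference content set (assumed)
NUM_VEHICLES = 3

def local_predict(seed):
    """Stand-in for a VU's locally trained content-popularity predictor."""
    rng = random.Random(seed)
    scores = [rng.random() for _ in range(NUM_CONTENTS)]
    total = sum(scores)
    return [s / total for s in scores]   # normalized popularity scores

def aggregate(all_scores):
    """RSU-side aggregation: average the per-vehicle soft predictions."""
    n = len(all_scores)
    return [sum(v[i] for v in all_scores) / n for i in range(NUM_CONTENTS)]

vehicle_scores = [local_predict(seed) for seed in range(NUM_VEHICLES)]
consensus = aggregate(vehicle_scores)

# Cache the top-k contents by consensus popularity.
k = 2
cache = sorted(range(NUM_CONTENTS), key=lambda i: -consensus[i])[:k]
print(cache)
```

Note the communication pattern: each vehicle uploads `NUM_CONTENTS` floats per round rather than a full model, which is the overhead reduction the distillation approach targets.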
Semantic-Aware Cooperative Communication and Computation Framework in Vehicular Networks
Zhang, Jingbo, Ji, Maoxin, Wu, Qiong, Fan, Pingyi, Wang, Kezhi, Chen, Wen
Semantic Communication (SC) combined with Vehicular Edge Computing (VEC) provides an efficient edge task processing paradigm for the Internet of Vehicles (IoV). Focusing on highway scenarios, this paper proposes a Tripartite Cooperative Semantic Communication (TCSC) framework, which enables Vehicle Users (VUs) to perform semantic task offloading via Vehicle-to-Infrastructure (V2I) and Vehicle-to-Vehicle (V2V) communications. Considering task latency and the number of semantic symbols, the framework formulates a Mixed-Integer Nonlinear Programming (MINLP) problem, which is decomposed into two subproblems. First, we propose a multi-agent proximal policy optimization task offloading method based on parametric distribution noise (MAPPO-PDN) to optimize the number of semantic symbols; second, linear programming (LP) is used to solve for the offloading ratio. Simulations show that the performance of this scheme is superior to that of other algorithms.
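To make the LP-style offloading-ratio subproblem concrete, here is a toy single-user version (our own illustration, not the paper's exact formulation): split a task between local computation and offloading so both parts finish at the same time, which minimizes the completion time max(t_local, t_offload). All rates and the task size are assumed numbers.

```python
S = 8.0e6              # task size in bits (assumed)
rate_local = 2.0e6     # local processing rate, bits/s (assumed)
rate_offload = 6.0e6   # effective offload rate incl. transmission, bits/s (assumed)

# With t_local = (1 - x) * S / rate_local and t_offload = x * S / rate_offload,
# max(t_local, t_offload) is minimized when the two finish simultaneously:
x = rate_offload / (rate_local + rate_offload)   # optimal offloading ratio

t_local = (1 - x) * S / rate_local
t_offload = x * S / rate_offload
print(round(x, 3), round(t_local, 3), round(t_offload, 3))  # 0.75 1.0 1.0
```

With multiple users, channel constraints, and discrete symbol counts, the balance point is no longer closed-form, which is why the paper couples an LP solver with a learned policy for the discrete part.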
AQUILA: A QUIC-Based Link Architecture for Resilient Long-Range UAV Communication
The proliferation of autonomous Unmanned Aerial Vehicles (UAVs) in Beyond Visual Line of Sight (BVLOS) applications is critically dependent on resilient, high-bandwidth, and low-latency communication links. Existing solutions face critical limitations: TCP's head-of-line blocking stalls time-sensitive data, UDP lacks reliability and congestion control, and cellular networks designed for terrestrial users degrade severely for aerial platforms. This paper introduces AQUILA, a cross-layer communication architecture built on QUIC to address these challenges. AQUILA contributes three key innovations: (1) a unified transport layer using QUIC's reliable streams for MAVLink Command and Control (C2) and unreliable datagrams for video, eliminating head-of-line blocking under unified congestion control; (2) a priority scheduling mechanism that structurally ensures C2 latency remains bounded and independent of video traffic intensity; (3) a UAV-adapted congestion control algorithm extending SCReAM with altitude-adaptive delay targeting and telemetry headroom reservation. AQUILA further implements 0-RTT connection resumption to minimize handover blackouts with application-layer replay protection, deployed over an IP-native architecture enabling global operation. Experimental validation demonstrates that AQUILA significantly outperforms TCP- and UDP-based approaches in C2 latency, video quality, and link resilience under realistic conditions, providing a robust foundation for autonomous BVLOS missions.
- North America > United States > New York > New York County > New York City (0.04)
- Asia > China > Jiangxi Province > Nanchang (0.04)
- Asia > China > Beijing > Beijing (0.04)
- Information Technology > Security & Privacy (1.00)
- Telecommunications > Networks (0.69)
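The strict-priority idea in AQUILA's second contribution can be sketched with a small scheduler: C2 frames are always dequeued before video frames, so C2 latency stays bounded regardless of video load. This is an illustrative sketch of the mechanism, not AQUILA's actual implementation; the class and frame names are assumptions.

```python
import heapq
import itertools

PRIO_C2, PRIO_VIDEO = 0, 1   # lower value = higher priority

class PriorityScheduler:
    def __init__(self):
        self._heap = []
        self._order = itertools.count()   # FIFO tie-break within one priority

    def enqueue(self, priority, frame):
        heapq.heappush(self._heap, (priority, next(self._order), frame))

    def dequeue(self):
        if not self._heap:
            return None
        _, _, frame = heapq.heappop(self._heap)
        return frame

sched = PriorityScheduler()
for i in range(3):
    sched.enqueue(PRIO_VIDEO, f"video-{i}")
sched.enqueue(PRIO_C2, "c2-heartbeat")

sent = [sched.dequeue() for _ in range(4)]
print(sent)   # the C2 frame jumps ahead of every queued video frame
```

Because the C2 queue is drained first at every send opportunity, C2 queueing delay depends only on C2 traffic itself, which is the structural latency bound the paper claims.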
PanFoMa: A Lightweight Foundation Model and Benchmark for Pan-Cancer
Huang, Xiaoshui, Zhu, Tianlin, Zuo, Yifan, Xia, Xue, Wu, Zonghan, Yan, Jiebin, Hua, Dingli, Xu, Zongyi, Fang, Yuming, Zhang, Jian
Single-cell RNA sequencing (scRNA-seq) is essential for decoding tumor heterogeneity. However, pan-cancer research still faces two key challenges: learning discriminative and efficient single-cell representations, and establishing a comprehensive evaluation benchmark. In this paper, we introduce PanFoMa, a lightweight hybrid neural network that combines the strengths of Transformers and state-space models to achieve a balance between performance and efficiency. PanFoMa consists of a front-end local-context encoder with shared self-attention layers to capture complex, order-independent gene interactions; and a back-end global sequential feature decoder that efficiently integrates global context using a linear-time state-space model. This modular design preserves the expressive power of Transformers while leveraging the scalability of Mamba to enable transcriptome modeling, effectively capturing both local and global regulatory signals. To enable robust evaluation, we also construct a large-scale pan-cancer single-cell benchmark, PanFoMaBench, containing over 3.5 million high-quality cells across 33 cancer subtypes, curated through a rigorous preprocessing pipeline. Experimental results show that PanFoMa outperforms state-of-the-art models on our pan-cancer benchmark (+4.0\%) and across multiple public tasks, including cell type annotation (+7.4\%), batch integration (+4.0\%) and multi-omics integration (+3.1\%). The code is available at https://github.com/Xiaoshui-Huang/PanFoMa.
- Research Report > Promising Solution (0.66)
- Research Report > New Finding (0.48)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
U.S. moves to deepen minerals supply chain in AI race with China
The U.S. will seek agreements with eight allied nations as part of a fresh effort to strengthen supply chains for the computer chips and critical minerals needed for artificial intelligence technology, according to the top State Department official for economic affairs. The initiative, which builds on efforts dating back to the first administration of President Donald Trump, unfolds as the U.S. looks to cut its dependence on China. It will begin with a meeting at the White House on Dec. 12 between the U.S. and counterparts from Japan, South Korea, Singapore, the Netherlands, the U.K., Israel, the United Arab Emirates and Australia, Jacob Helberg, the undersecretary of state for economic affairs, said in an interview. Helberg, a former adviser at Palantir Technologies, said the summit will focus on reaching agreements across the areas of energy, critical minerals, advanced manufacturing, semiconductors, AI infrastructure, and transportation logistics.
- North America > United States (1.00)
- Oceania > Australia (0.25)
- Europe > Netherlands (0.25)
- (9 more...)
- Government > Regional Government > North America Government > United States Government (0.91)
- Media (0.74)
- Leisure & Entertainment (0.73)
S^2-KD: Semantic-Spectral Knowledge Distillation for Spatiotemporal Forecasting
Wang, Wenshuo, Shen, Yaomin, Tan, Yingjie, Chen, Yihao
Spatiotemporal forecasting often relies on computationally intensive models to capture complex dynamics. Knowledge distillation (KD) has emerged as a key technique for creating lightweight student models, with recent advances like frequency-aware KD successfully preserving spectral properties (i.e., high-frequency details and low-frequency trends). However, these methods are fundamentally constrained by operating on pixel-level signals, leaving them blind to the rich semantic and causal context behind the visual patterns. To overcome this limitation, we introduce S^2-KD, a novel framework that unifies Semantic priors with Spectral representations for distillation. Our approach begins by training a privileged, multimodal teacher model. This teacher leverages textual narratives from a Large Multimodal Model (LMM) to reason about the underlying causes of events, while its architecture simultaneously decouples spectral components in its latent space. The core of our framework is a new distillation objective that transfers this unified semantic-spectral knowledge into a lightweight, vision-only student. Consequently, the student learns to make predictions that are not only spectrally accurate but also semantically coherent, without requiring any textual input or architectural overhead at inference. Extensive experiments on benchmarks like WeatherBench and TaxiBJ+ show that S^2-KD significantly boosts the performance of simple student models, enabling them to outperform state-of-the-art methods, particularly in long-horizon and complex non-stationary scenarios.
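For readers unfamiliar with the distillation mechanism S^2-KD extends, here is the generic temperature-softened knowledge-distillation objective in plain Python. This is a standard textbook KD loss shown for orientation only; S^2-KD's actual semantic-spectral objective is more elaborate, and the logit values below are made up.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * T * T

teacher = [2.0, 1.0, 0.1]
student_good = [1.9, 1.1, 0.2]   # close to the teacher's view
student_bad  = [0.1, 1.0, 2.0]   # disagrees with the teacher

loss_good = kd_loss(teacher, student_good)
loss_bad = kd_loss(teacher, student_bad)
print(loss_good < loss_bad)   # True: matching the teacher lowers the loss
```

S^2-KD's contribution sits in what the teacher encodes (LMM-derived semantic narratives plus decoupled spectral components), not in the KL machinery itself.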
A Disentangled Representation Learning Framework for Low-altitude Network Coverage Prediction
Li, Xiaojie, Cai, Zhijie, Qi, Nan, Dong, Chao, Zhu, Guangxu, Ma, Haixia, Wu, Qihui, Jin, Shi
The expansion of the low-altitude economy has underscored the significance of Low-Altitude Network Coverage (LANC) prediction for designing aerial corridors. While accurate LANC forecasting hinges on the antenna beam patterns of Base Stations (BSs), these patterns are typically proprietary and not readily accessible. Operational parameters of BSs, which inherently contain beam information, offer an opportunity for data-driven low-altitude coverage prediction. However, collecting extensive low-altitude road test data is cost-prohibitive, often yielding only sparse samples per BS. This scarcity results in two primary challenges: imbalanced feature sampling due to limited variability in high-dimensional operational parameters against the backdrop of substantial changes in low-dimensional sampling locations, and diminished generalizability stemming from insufficient data samples. To overcome these obstacles, we introduce a dual strategy comprising expert knowledge-based feature compression and disentangled representation learning. The former reduces feature space complexity by leveraging communications expertise, while the latter enhances model generalizability through the integration of propagation models and distinct subnetworks that capture and aggregate the semantic representations of latent features. Experimental evaluation confirms the efficacy of our framework, yielding a 7% reduction in error compared to the best baseline algorithm. Real-network validations further attest to its reliability, achieving practical prediction accuracy with MAE errors at the 5 dB level.
Xiaojie Li is with the National Mobile Communication Research Laboratory, Southeast University, Nanjing 210096, China, also with the College of Physics, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China, and also with the Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong-Shenzhen, Guangdong 518172, China (e-mail: xiaojieli@nuaa.edu.cn).
- Asia > China > Jiangsu Province > Nanjing (0.65)
- Asia > China > Guangdong Province > Shenzhen (0.44)
- Asia > China > Hong Kong (0.24)
- (5 more...)
- Telecommunications (1.00)
- Information Technology > Networks (0.48)
- Information Technology > Communications > Networks (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
MamTiff-CAD: Multi-Scale Latent Diffusion with Mamba+ for Complex Parametric Sequence
Deng, Liyuan, Bai, Yunpeng, Dai, Yongkang, Huang, Xiaoshui, Gan, Hongping, Huang, Dongshuo, Hao, Jiacheng, Shi, Yilei
Parametric Computer-Aided Design (CAD) is crucial in industrial applications, yet existing approaches often struggle to generate long sequence parametric commands due to complex CAD models' geometric and topological constraints. To address this challenge, we propose MamTiff-CAD, a novel CAD parametric command sequences generation framework that leverages a Transformer-based diffusion model for multi-scale latent representations. Specifically, we design a novel autoencoder that integrates Mamba+ and Transformer, to transfer parameterized CAD sequences into latent representations. The Mamba+ block incorporates a forget gate mechanism to effectively capture long-range dependencies. The non-autoregressive Transformer decoder reconstructs the latent representations. A diffusion model based on multi-scale Transformer is then trained on these latent embeddings to learn the distribution of long sequence commands. In addition, we also construct a dataset that consists of long parametric sequences, which is up to 256 commands for a single CAD model. Experiments demonstrate that MamTiff-CAD achieves state-of-the-art performance on both reconstruction and generation tasks, confirming its effectiveness for long sequence (60-256) CAD model generation.
Panther: A Cost-Effective Privacy-Preserving Framework for GNN Training and Inference Services in Cloud Environments
Chen, Congcong, Liu, Xinyu, Huang, Kaifeng, Wei, Lifei, Shi, Yang
Graph Neural Networks (GNNs) have marked significant impact in traffic state prediction, social recommendation, knowledge-aware question answering and so on. As more and more users move towards cloud computing, it has become a critical issue to unleash the power of GNNs while protecting privacy in cloud environments. Specifically, the training data and inference data for GNNs need to be protected from being stolen by external adversaries. Meanwhile, the financial cost of cloud computing is another primary concern for users. Therefore, although existing studies have proposed privacy-preserving techniques for GNNs in cloud environments, their additional computational and communication overhead remains relatively high, causing high financial costs that limit their widespread adoption among users. To protect GNN privacy while lowering the additional financial costs, we introduce Panther, a cost-effective privacy-preserving framework for GNN training and inference services in cloud environments. Technically, Panther leverages four-party computation to asynchronously execute the secure array access protocol, and randomly pads the neighbor information of GNN nodes. We prove that Panther can protect privacy for both training and inference of GNN models. Our evaluation shows that Panther reduces the training and inference time by an average of 75.28% and 82.80%, respectively, and communication overhead by an average of 52.61% and 50.26% compared with the state-of-the-art, which is estimated to save an average of 55.05% and 59.00% in financial costs (based on the on-demand pricing model) for the GNN training and inference process on Google Cloud Platform.
- Information Technology > Services (1.00)
- Information Technology > Security & Privacy (1.00)
- Education > Educational Setting > Higher Education (0.46)
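To give a feel for the multi-party-computation machinery behind frameworks like Panther, here is toy additive secret sharing over a ring: each party holds one share, no single share reveals the value, and shared values can be added without reconstruction. This is a generic textbook primitive shown for illustration; Panther's four-party secure array access protocol is considerably more involved, and all values below are made up.

```python
import random

RING = 2 ** 32            # share arithmetic is done modulo this ring
rng = random.Random(0)    # fixed seed for a reproducible demo

def share(value, n_parties=4):
    """Split `value` into n additive shares that sum to it mod RING."""
    shares = [rng.randrange(RING) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % RING)
    return shares

def reconstruct(shares):
    return sum(shares) % RING

# Linear homomorphism: parties add their shares locally, and the sum of
# the two secrets is recovered without either secret being revealed.
a_shares = share(10)
b_shares = share(32)
sum_shares = [(x + y) % RING for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))   # 42
```

Nonlinear steps (like the comparisons and array lookups a GNN pipeline needs) require interactive protocols on top of this primitive, which is where most of the overhead that Panther optimizes comes from.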